This paper addresses a difficult nonlinear problem in financial markets using complex-system theory and the principles of nonlinear dynamics, following a data-model-concept-practice, issue-oriented approach in which the phase space is reconstructed from high-frequency trade data. In theory, we obtain a differentiable-manifold geometric configuration, identify a Yang-Mills functional in financial markets, derive a meaningful conserved quantity through the corresponding non-Abelian local gauge symmetry transformation on space-time, and obtain financial solitons, which shows that there is a strict correspondence between the manifold's fibre bundle and the gauge field in financial markets. In practical applications, we repeatedly carried out experimental tests in a fluctuating, evolving market, directly simulating and validating the existence of solitons by studying price fluctuations (a social phenomenon) with the same methods and criteria used in natural science, and by testing in actual trading on the stock Guangzhou Proprietary and the Fuel Oil futures in China. The results demonstrate that the financial solitons discovered indicate a new kind of substance and form of energy in financial trading markets, which likely points to a new scientific paradigm in the economic and social domains beyond physics.
Abstract I reflect upon the development of nonlinear time series analysis since 1990 by focusing on five major areas of development. These areas include the interface between nonlinear time series analysis and chaos, the nonparametric/semiparametric approach, nonlinear state-space modelling, financial time series, and nonlinear modelling of panels of time series.
This article discusses problems of validating classification models, especially in datasets where sample sizes are small and the number of variables is large. It describes the use of the percentage correctly classified (%CC) as an indicator of the success of a classification model. For small datasets, %CC should not be used uncritically, and its interpretation depends on sample size. The article illustrates this with a common classification method, discriminant partial least squares (D-PLS), applied to a randomly generated dataset of 200 samples and 200 variables.
One aim of the classifier is to determine whether the null hypothesis (that there is no distinction between the two classes) can be rejected. Autoprediction gives 84.5 %CC. It is shown that, if variable selection is used, it must be performed on the training set alone to obtain a %CC close to 50% on the test set; otherwise, over-optimistic and false conclusions can be reached about the ability to classify samples into groups.
Finally, two aims of assessing the quality of a model are frequently confused, namely optimisation (often used to determine the most appropriate number of components in a model) and independent validation; to keep them separate, the data should be split into three groups.
Difficulties often arise in model building if validation and optimisation have been performed on different groups of samples, especially with iterative methods, where each group is modelled using different properties, such as a different number of components or different variables.
Some questions that emerge from the electronic data processing of molecular structures (graphs) and their fragments are considered in this work. Quantitative estimates of subgraph positions in molecular graphs are presented, and some properties of their maximal common subgraphs are described.